Functional Hashing for Compressing Neural Networks

Authors

  • Lei Shi
  • Shikun Feng
  • Zhifan Zhu
Abstract

As the complexity of deep neural networks (DNNs) grows to absorb ever-increasing data sizes, memory and energy consumption have been receiving more and more attention in industrial applications, especially on mobile devices. This paper presents a novel structure based on functional hashing to compress DNNs, namely FunHashNN. For each entry in a deep net, FunHashNN uses multiple low-cost hash functions to fetch values in the compression space, and then employs a small reconstruction network to recover that entry. The reconstruction network is plugged into the whole network and trained jointly. FunHashNN includes the recently proposed HashedNets [7] as a degenerate case, and benefits from larger value capacity and smaller reconstruction loss. We further discuss extensions with dual-space hashing and multi-hops. On several benchmark datasets, FunHashNN demonstrates high compression ratios with little loss in prediction accuracy.
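To make the mechanism concrete, here is a minimal PyTorch sketch of the functional-hashing idea described above, not the paper's implementation: the class name FunHashLinear, the use of fixed random index maps in place of true low-cost hash functions, and the sizes (comp_size, n_hashes, recon_hidden) are all illustrative assumptions.

```python
# Minimal sketch of functional hashing for a fully connected layer.
import torch
import torch.nn as nn

class FunHashLinear(nn.Module):
    def __init__(self, in_features, out_features, comp_size=256,
                 n_hashes=4, recon_hidden=8):
        super().__init__()
        # Shared "compression space": the only large trainable weight store.
        self.memory = nn.Parameter(torch.randn(comp_size) * 0.05)
        # k cheap hash functions, stood in for here by fixed random index
        # maps from each virtual entry (i, j) into the compression space.
        idx = torch.randint(0, comp_size, (n_hashes, out_features, in_features))
        self.register_buffer("hash_idx", idx)
        # Small reconstruction network g: maps the k fetched values to a weight.
        self.recon = nn.Sequential(
            nn.Linear(n_hashes, recon_hidden), nn.Tanh(),
            nn.Linear(recon_hidden, 1),
        )
        self.bias = nn.Parameter(torch.zeros(out_features))

    def forward(self, x):
        # Fetch k values per virtual entry: (k, out, in) -> (out, in, k).
        fetched = self.memory[self.hash_idx].permute(1, 2, 0)
        # Reconstruct the full virtual weight matrix entry by entry.
        w = self.recon(fetched).squeeze(-1)  # (out, in)
        return x @ w.t() + self.bias

layer = FunHashLinear(128, 64)
print(layer(torch.randn(2, 128)).shape)  # torch.Size([2, 64])
```

Setting n_hashes to 1 and replacing the reconstruction network with the identity recovers the HashedNets-style weight sharing that the abstract describes as a degenerate case.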


Similar Articles

Hfh: Homologically Functional Hashing for Compressing Deep Neural Networks

As the complexity of deep neural networks (DNNs) grows to absorb ever-increasing data sizes, memory and energy consumption have been receiving more and more attention in industrial applications, especially on mobile devices. This paper presents a novel structure based on homologically functional hashing to compress DNNs, named HFH for short. For each weight entry in a deep net, HFH...


Compressing deep convolutional neural networks in visual emotion recognition

In this paper, we consider the problem of the excessive runtime and memory-space complexity of deep convolutional neural networks for visual emotion recognition. A survey of recent compression methods and efficient neural network architectures is provided. We experimentally compare the computational speed and memory consumption during the training and inference stages of such methods as t...


Compressing Deep Neural Networks: A New Hashing Pipeline Using Kac's Random Walk Matrices

The popularity of deep learning is increasing by the day. However, despite recent advancements in hardware, deep neural networks remain computationally intensive. Recent work has shown that, by preserving the angular distance between vectors, random feature maps can reduce dimensionality without introducing bias into the estimator. We test a variety of established hashing pipelines as ...
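The paper's Kac's-random-walk construction is not reproduced here, but the angle-preservation property this abstract appeals to can be illustrated with a plain Gaussian random projection; the dimensions and seed below are arbitrary assumptions.

```python
# Illustration: a random projection approximately preserves angular distance.
import numpy as np

rng = np.random.default_rng(0)
d, k = 1024, 128                               # original and reduced dimensions
x, y = rng.standard_normal(d), rng.standard_normal(d)
R = rng.standard_normal((k, d)) / np.sqrt(k)   # dense Gaussian projection

def angle(u, v):
    # Angular distance between two vectors.
    return np.arccos(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

print(angle(x, y), angle(R @ x, R @ y))  # the two angles should be close
```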


Compressing Neural Networks with the Hashing Trick

As deep nets are increasingly used in applications suited for mobile devices, a fundamental dilemma becomes apparent: the trend in deep learning is to grow models to absorb ever-increasing data set sizes; however, mobile devices are designed with very little memory and cannot store such large models. We present a novel network architecture, HashedNets, that exploits inherent redundancy in neural ...
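For contrast with the FunHashNN sketch above, here is an equally minimal sketch of the hashing-trick weight sharing that this abstract describes; the class name HashedLinear and the fixed random index map standing in for the hash function are illustrative assumptions.

```python
# Minimal sketch of HashedNets-style weight sharing: one hash function maps
# every virtual entry (i, j) directly to a shared bucket value.
import torch
import torch.nn as nn

class HashedLinear(nn.Module):
    def __init__(self, in_features, out_features, n_buckets=256):
        super().__init__()
        # The shared bucket values are the only trainable weights.
        self.buckets = nn.Parameter(torch.randn(n_buckets) * 0.05)
        # Fixed random hash from each virtual entry to one bucket index.
        idx = torch.randint(0, n_buckets, (out_features, in_features))
        self.register_buffer("hash_idx", idx)
        self.bias = nn.Parameter(torch.zeros(out_features))

    def forward(self, x):
        w = self.buckets[self.hash_idx]  # virtual (out, in) weight matrix
        return x @ w.t() + self.bias

layer = HashedLinear(128, 64, n_buckets=100)
print(layer(torch.randn(2, 128)).shape)  # torch.Size([2, 64])
```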


Neural Network Sensitivity to Inputs and Weights and its Application to Functional Identification of Robotics Manipulators

Neural networks are applied to system identification problems, using adaptive algorithms for either parameter or functional estimation of dynamic systems. In this paper, the sensitivity of neural networks to input values and connection weights is studied. Reduction-Sigmoid-Amplification (RSA) neurons are introduced, and four different models of neural network architecture are proposed and...
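The RSA-neuron models are not specified in this excerpt, but the basic quantity being studied, a network's sensitivity to its inputs and weights, can be sketched generically with automatic differentiation; the toy network below is purely illustrative.

```python
# Sensitivity as partial derivatives of the (summed) output with respect
# to the inputs and the connection weights, via autograd.
import torch

net = torch.nn.Sequential(torch.nn.Linear(4, 8), torch.nn.Sigmoid(),
                          torch.nn.Linear(8, 1))
x = torch.randn(1, 4, requires_grad=True)
net(x).sum().backward()
print("input sensitivity dy/dx:", x.grad)
print("weight sensitivity |dy/dW|:", net[0].weight.grad.norm())
```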




Journal:
  • CoRR

Volume: abs/1605.06560

Pages: –

Publication date: 2016